This is a quick post to talk a little bit about what I’m planning to focus on in the near and medium-term future, and to highlight that I’m currently hiring for a joint executive and research assistant position. You can read more about the role and apply here! If you’re potentially interested, hopefully the comments below can help you figure out whether you’d enjoy the role.
Recent advances in AI, combined with economic modelling (e.g. here), suggest that we might well face explosive AI-driven growth in technological capability in the next decade, where what would have been centuries of technological and intellectual progress on a business-as-usual trajectory occur over the course of just months or years.
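To get a rough quantitative feel for what that compression would mean, here is a minimal back-of-the-envelope sketch; the 3%-per-year baseline growth rate and the three-year window are purely illustrative assumptions of mine, not figures from the post or the linked modelling:

```python
# Illustrative back-of-the-envelope sketch (assumed numbers, not from the post):
# if "technological capability" normally compounds at a stylised 3%/year,
# how fast must it compound to pack a century of progress into a few years?

baseline_rate = 0.03      # illustrative business-as-usual growth rate
years_of_progress = 100   # amount of progress we imagine being compressed
compressed_into = 3       # years it takes during the hypothesised explosion

total_growth = (1 + baseline_rate) ** years_of_progress       # ~19x capability
implied_rate = total_growth ** (1 / compressed_into) - 1      # annual rate needed

print(f"A century at {baseline_rate:.0%}/yr multiplies capability ~{total_growth:.0f}x")
print(f"Compressing that into {compressed_into} years implies ~{implied_rate:.0%}/yr growth")
```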
Most effort to date, from those worried by an intelligence explosion, has been on ensuring that AI systems are aligned: that they do what their designers intend them to do, at least closely enough that they don’t cause catastrophically bad outcomes.
But even if we make sufficient progress on alignment, humanity will have to make, or fail to make, many hard-to-reverse decisions with important and long-lasting consequences. I call these decisions Grand Challenges. Over the course of an explosion in technological capability, we will have to address many Grand Challenges in a short space of time, including, potentially: what rights to give digital beings; how to govern the development of many new weapons of mass destruction; who gets control over an automated military; how to deal with fast-reproducing human or AI citizens; how to maintain good reasoning and decision-making even in the face of powerful persuasion technology and a greatly improved ability to ideologically indoctrinate others; and how to govern the race for space resources.
As a comparison, imagine that explosive growth had occurred in Europe in the 11th century, and that all the intellectual and technological advances that took a thousand years in our actual history had occurred over the course of just a few years. It’s hard to see how decision-making would have gone well under those conditions.
The governance of explosive growth seems to me to be of comparable importance as AI alignment, not dramatically less tractable, and is currently much more neglected. The marginal cost-effectiveness of work in this area therefore seems to be even higher than marginal work on AI alignment. It is, however, still very pre-paradigmatic: it’s hard to know what’s most important in this area, what things would be desirable to push on, or even what good research looks like.
I’ll talk more about all this in my EAG: Bay Area talk, “New Frontiers in Effective Altruism.” I’m far from the only person to highlight these issues, though. For example, Holden Karnofsky has an excellent blog post on issues beyond misalignment; Lukas Finnveden has a great post on similar themes here and an extensive and in-depth series on potential projects here. More generally, I think there’s a lot of excitement about work in this broad area that isn’t yet being represented in places like the Forum. I’d be keen for more people to start learning about and thinking about these issues.
Over the last year, I’ve done a little bit of exploratory research into some of these areas; over the next six months, I plan to continue this in a focused way, with an eye toward making this a multi-year focus. In particular, I’m interested in the rights of digital beings, the governance of space resources, and, above all, the “meta” challenge of ensuring that we have good deliberative processes through the period of explosive growth. (One can think of work on the meta challenge as fleshing out somewhat realistic proposals that could take us in the direction of the “long reflection”.) By working on good deliberative processes, we could thereby improve decision-making on all the Grand Challenges we will face. This work could help with AI safety, too: if we can guarantee power-sharing after the development of superintelligence, that decreases the incentive for competitors to race and cut corners on safety.
I’m not sure yet what output this would ultimately lead to, if I decide to continue work on this beyond the next six months. Plausibly there could be many possible books, policy papers, or research institutes on these issues, and I’d be excited to help make happen whichever of these seem highest-impact after further investigation.
Beyond this work, I’ll continue to provide support for individuals and organisations in EA (such as via fundraising, advice, advocacy and passing on opportunities) in an 80/20 way; most likely, I’ll just literally allocate 20% of my time to this, and spend the remaining 80% on the ethics and governance issues I list above. I expect not to be very involved with organisational decision-making (for example by being on boards of EA organisations) in the medium term, in order to stay focused and play to my comparative advantage.
I’m looking for a joint research and executive assistant to help with the work outlined above. The role involves research tasks, such as providing feedback on drafts and conducting literature reviews and small research projects, as well as administrative tasks like processing emails, scheduling, and travel booking. It could also grow into a more senior role, depending on experience and performance.
Example projects that a research assistant could help with include:
A literature review on the drivers of moral progress.
A “literature review” focused on reading through LessWrong, the EA Forum, and other blogs, and finding the best work there related to the fragility of value thesis.
Case studies on: What exactly happened to result in the creation of the UN, and what explains the precise nature of the UN Charter? What can we learn from it? Similarly for the Kyoto Protocol, the Nuclear Non-Proliferation Treaty, and the Montreal Protocol.
Short original research projects, such as:
Figuring out what a good operationalisation of transformative AI would be, for the purpose of creating an early tripwire to alert the world of an imminent intelligence explosion.
Taking some particular neglected Grand Challenge, and fleshing out the reasons why this Grand Challenge might or might not be a big deal.
Supposing that the US wanted to make an agreement to share power and respect other countries’ sovereignty in the event that it develops superintelligence, figuring out how we could legibly guarantee future compliance with that agreement, such that the commitment is credible to other countries.
The deadline for applications is February the 11th. If this seems interesting, please apply!
As someone who is a) skeptical of X-risk from AI, but b) thinks there is a non-negligible (even if relatively low, maybe 3-4%) chance we'll see 100 years of progress in 15 years at some point in the next 50 years, I'm glad you're looking at this.
Thanks! Didn't know you're sceptical of AI x-risk. I wonder if there's a correlation between being a philosopher and having low AI x-risk estimates; it seems that way anecdotally.
Yeah. I actually work on it right now (governance/forecasting, not technical stuff, obviously) because it's the job that I managed to get when I really needed a job (and it's interesting), but I remain personally skeptical. Though it is hard to tell the difference in such a speculative context between 1 in 1000 (which probably means it is actually worth working on in expectation, at least if you expect X-risk to drop dramatically if AI is negotiated successfully and have totalist sympathies in population ethics) and 1 in 1 million* (which might look worth working on in expectation if taken literally, but is probably really a signal that it might be way lower for all you know.) I don't have anything terribly interesting to say about why I'm skeptical: just boring stuff about how prediction is hard, and your prior should be low on a very specific future path, and social epistemology worries about bubbles and ideas that pattern-match to religious/apocalyptic, combined with a general feeling that the AI risk stuff I have read is not rigorous enough to (edit, missing bit here) overcome my low prior.
'I wonder if there's a correlation between being a philosopher and having low AI x-risk estimates; it seems that way anecdotally.'
I hadn't heard that suggested before. But you will have a much better idea of the distribution of opinion than me. My guess would be that the divide will be LW/rationalist versus not. "Low" is also ambiguous of course: compared to MIRI people, or even someone like Christiano, you, or Joe Carlsmith probably have "low" estimates, but they are likely a lot higher than AI X-risk "skeptics" outside EA.
*Seems too low to me, but I am of course biased.
Christiano says ~22% ("but you should treat these numbers as having 0.5 significant figures") without a time-bound; and Carlsmith says ">10%" (see bottom of abstract) by 2070. So no big difference there.
Fair point. Carlsmith said less originally.
Hi Will,
What is especially interesting here is your focus on an all-hazards approach to Grand Challenges. Improved governance has the potential to influence all cause areas, including long-term and short-term causes, x-risks, and s-risks.
Here at the Odyssean Institute, we’re developing a novel approach to these deep questions of governing Grand Challenges. We’re currently running our first horizon scan on tipping points in global catastrophic risk, and will use this as the first step of a longer-term process which will include Decision Making under Deep Uncertainty (developed at RAND) and a deliberative democratic jury or assembly. In our White Paper on the Odyssean Process, we outlined how their combination would be a great contribution to avoiding short-termist thinking in policy formulation around GCRs. We’re happy to see you and OpenAI taking a keen interest in this flourishing area of deliberative democratic governance!
We are highly encouraged by the fact that you see it as “of comparable importance as AI alignment, not dramatically less tractable, and is currently much more neglected. The marginal cost-effectiveness of work in this area therefore seems to be even higher than marginal work on AI alignment.” Despite this, the work remains neglected even within EA, and would thus benefit from greater focus and from more resources being allocated to it. We’d welcome a chance to discuss this in more depth with you and others interested in supporting it.
Figuring out what a good operationalisation of transformative AI would be, for the purpose of creating an early tripwire to alert the world of an imminent intelligence explosion.
FWIW many people are already very interested in capability evaluations related to AI acceleration of AI R&D.
For instance, at the UK AI Safety Institute, the Loss of Control team is interested in these evaluations.
Some quotes:
Introducing the AI Safety Institute:
Loss of control: As advanced AI systems become increasingly capable, autonomous, and goal-directed, there may be a risk that human overseers are no longer capable of effectively constraining the system’s behaviour. Such capabilities may emerge unexpectedly and pose problems should safeguards fail to constrain system behaviour. Evaluations will seek to avoid such accidents by characterising relevant abilities, such as the ability to deceive human operators, autonomously replicate, or adapt to human attempts to intervene. Evaluations may also aim to track the ability to leverage AI systems to create more powerful systems, which may lead to rapid advancements in a relatively short amount of time.
Jobs:
Build and lead a team focused on evaluating capabilities that are precursors to extreme harms from loss of control, with a current focus on autonomous replication and adaptation, and uncontrolled self-improvement.
Thanks so much for those links, I hadn't seen them!
(So much AI-related stuff coming out every day, it's so hard to keep on top of everything!)
METR (‘Model Evaluation & Threat Research’) might also be worth mentioning. I wonder if there's a list of capability evaluation projects somewhere.
Thanks for the update, Will!
Since you frame the choice as between work on alignment and work on grand challenges/non-alignment work needed under transformative AI, I am curious how you think about pause efforts as a third class of work. Is this something you have thoughts on?
Perhaps at the core there is a theme here that comes up a lot which goes a bit like: Clearly there is a strong incentive to 'work on' any imminent and unavoidable challenge whose resolution could require or result in "hard-to-reverse decisions with important and long-lasting consequences". Current x-risks have been established as sort of the 'most obvious' such challenges (in the sense that making wrong decisions potentially results in extinction, which obviously counts as 'hard-to-reverse' and the consequences of which are 'long-lasting'). But can we think of any other such challenges or any other category of such challenges? I don't know of any others that I've found anywhere near as convincing as the x-risk case, but I suppose that's why the example project on case studies could be important?
Another thought I had is kind of: Why might people who have been concerned about x-risk from misaligned AI pivot to asking about these other challenges? (I'm not saying Will counts as 'pivoting' but just generally asking the question). I think one question I have in mind is: Is it because we have already reached a point of small (and diminishing) returns from putting today's resources into the narrower goal of reducing x-risk from misaligned AI?
Hey - I’m starting to post and comment more on the Forum than I have been, and you might be wondering about whether and when I’m going to respond to questions around FTX. So here’s a short comment to explain how I’m currently thinking about things:
The independent investigation commissioned by EV is still ongoing, and the firm running it strongly preferred that I not publish posts on backwards-looking topics around FTX while the investigation is still in progress. I don’t know when it’ll be finished, or what the situation will be like for communicating on these topics even after it’s done.
I had originally planned to get out a backwards-looking post early in the year, and I had been holding off on talking about other things until that was published. That post has been repeatedly delayed, and I’m not sure when it’ll be able to come out. If I’d known that it would be delayed this long, I wouldn’t have waited on it before talking about other topics, so I’m now going to start talking more than I have been, on the Forum and elsewhere; I’m hoping I can be helpful on some of the other issues that are currently active topics of discussion.
Briefly, though, and as I indicated before: I had no idea that Sam and others were misusing customer funds. Since November I’ve thought a lot about whether there were signs of this I really should have spotted, but even in hindsight I don’t think I had reason to suspect that that was happening.
Looking back, I wish I’d been far less trusting of Sam and those who’ve pleaded guilty. Looking forward, I’m going to be less likely to infer that, just because someone has sincere-seeming signals of being highly morally motivated, like being vegan or demonstrating credible plans to give away most of their wealth, they will have moral integrity in other ways, too.
I’m also more wary, now, of having such a high-trust culture within EA, especially as EA grows. This thought favours robust governance mechanisms even more than before ("trust but verify"), so that across EA we can have faith in organisations and institutions, rather than heavily relying on character judgement about the leaders of those organisations.
EA has grown enormously over the last few years; in many ways it feels like an adolescent, in the process of learning how to deal with its newfound role in the world. I’m grateful that we’re in a moment of opportunity to think more about how to improve ourselves, including both how we work and how we think and talk about effective altruism.
As part of that broader set of reflections (especially around the issue of (de)centralisation in EA), I’m making some changes to how I operate, which I describe, along with some of the other changes happening across EA, in my post on decision-making and decentralisation here. First, I plan to distance myself from the idea that I’m “the face of” or “the spokesperson for” EA; this isn’t how I think of myself, and I don’t think that description reflects reality, but I’m sometimes portrayed that way. I think moving in the direction of clarity on this will better reflect reality and be healthier for both me and the movement.
Second, I plan to step down from the board of Effective Ventures UK once it has more capacity and has recruited more trustees. I found it tough to come to this decision: I’ve been on the board of EV UK (formerly CEA) for 11 years now, and I care deeply, in a very personal way, about the projects housed under EV UK, especially CEA, 80,000 Hours, and Giving What We Can. But I think it’s for the best, and when I do step down I’ll know that EV will be in good hands.
Over the next year, I’ll continue to do learning and research on global priorities and cause prioritisation, especially in light of the astonishing (and terrifying) developments in AI over the last year. And I’ll continue to advocate for EA and related ideas: for example, in September, WWOTF will come out in paperback in the US and UK, and will come out in Spanish, German, and Finnish that month, too. Given all that’s happened in the world in the last few years — including a major pandemic, war in Europe, rapid AI advances, and an increase in extreme poverty rates — it’s more important than ever to direct people, funding and clear thinking towards the world’s most important issues. I’m excited to continue to help make that happen.
I'm curious about ways you might mitigate being seen as the face of/spokesperson for EA.
Honestly, it does seem like it might be challenging, and I welcome ideas on things to do. (In particular, it might be hard without sacrificing lots of value in other ways. E.g. going on big-name podcasts can be very, very valuable, and I wouldn’t want to indefinitely avoid doing that - that would be too big a cost. More generally, public advocacy is still very valuable, and I still plan to be “a” public proponent of EA.)
The lowest-hanging fruit is just really hammering the message to journalists / writers I speak to; but there’s not a super tight correlation between what I say to journalists / writers and what they write about. Having others give opening / closing talks at EAG also seems like an easy win.
The ideal is that we build up a roster of EA-aligned public figures. I’ve been spending some time on that this year, providing even more advice / encouragement to potential public figures than before, and connecting them to my network. The last year has made it more challenging though, as there are larger costs to being an EA-promoting public figure than there were before, so it’s a less attractive prospect; at the same time, a lot of people are now focusing on AI in particular. But there are a number of people who I think could be excellent in this position.
“First, building up a solid roster of EA public figures will take a while - many years, at least. For example, suppose that someone decides to become a public figure and goes down a book-writing path. Writing a book typically takes a couple of years or more, then there’s a year between finishing the manuscript and publication. And people’s first books are rarely huge hits - someone’s public profile tends to build slowly over time. There are also just a few things that are hard and take time to replicate, like conventional status indicators (being a professor at a prestigious university).
Second, I don’t think we’re ever going to be able to get away from a dynamic where a handful of public figures are far more well-known than all others. Amount of public attention (as measured by, e.g. twitter followers) follows a power law. So if we try to produce a lot of potential public figures, just via the underlying dynamics we’ll probably get a situation where the most-well-known person is a lot more well-known than the next most-well-known person, and so on.
(The same dynamic is why it’s more or less inevitable that much or most funding in EA will come from a small handful of donors. Wealth follows a fat-tailed distribution; the people who are most able to donate will be able to donate far more than most people. Even for GiveWell, which is clearly aimed at “retail” donors, 51% of their funds raised came from a single donor — Open Philanthropy.)
It’s also pretty chance-y who gets the most attention at any one time. WWOTF ended up getting a lot more media attention than The Precipice; this could easily have been the other way around. It certainly didn’t have anything to do with the intrinsic quality of the two books; rather, out-of-control factors, like The Precipice being published right as COVID hit, made a significant difference. Toby also got a truly enormous amount of media attention in both 2009 and 2010 (I think he was the most-read news story on the BBC both times); if that had happened now, he’d have a much larger public profile than he currently does.
All this is to say: progress here will take some time. A major success story for this plan would be that, in five years’ time, there are a couple more well-known EA figureheads, in addition to what we have now. That said, there are still things we can do to make progress on this in the near term: having other people speak with the media when they can; having other people give the opening talks at EAGs; and showcasing EAs who already have public platforms, like Toby Ord, or Natalie Cargill, who has an excellent TED talk that’s coming out this year.”
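To illustrate the power-law point in the quoted passage above, here is a minimal simulation sketch; the Pareto shape parameter and the population size are arbitrary illustrative choices of mine, not estimates from the comment:

```python
# Illustrative sketch of the fat-tail dynamic described above: when attention
# (or wealth) is drawn from a heavy-tailed distribution, the top individual
# tends to account for a large share of the total. Parameters are arbitrary.
import random

random.seed(0)
alpha = 1.16                # Pareto shape parameter; a heavy "80/20"-style tail
n_people = 1_000            # hypothetical public figures or donors

samples = [random.paretovariate(alpha) for _ in range(n_people)]
samples.sort(reverse=True)
total = sum(samples)

print(f"Top person's share of total: {samples[0] / total:.0%}")
print(f"Top 10 people's share:       {sum(samples[:10]) / total:.0%}")
```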
CEA distributes books at scale, right? Seems like offering more different books could boost name recognition of other authors and remove a signal of emphasis on you. This would be far from a total fix, but is very easy to implement.
I haven't kept up with recent books, but back in 2015 I preferred Nick Cooney's intro to EA book to both yours and Peter Singer's, and thought it was a shame it got a fraction of the attention.
Presumably it's easier to sell your own book than someone else's? I assume CEA is able to get a much better rate on The Precipice and What We Owe The Future than How To Be Great At Doing Good or The Most Good You Can Do. The Life You Can Save (the org) even bought the rights to The Life You Can Save (the book) to make it easier to distribute.
[Edit: This may have been a factor too/instead:
"In my personal case, all of the proceeds from the book — all of the advances and royalties — are going to organizations focused on making sure we have a long-term future." - Toby
"All proceeds from What We Owe The Future are donated to the Longtermism Fund" - Will
I can't find anything similar for Peter's or Nick's books.]
It will always be easier to promote nearby, highly popular people than more distant, lesser-known people. One person being the "face" is the natural outcome of that dynamic. If you want a diverse field you need to promote other people even when it's more effort in the short run.
If you want a diverse field you need to promote other people even when it's more effort in the short run.
Agreed, sorry, I should have been clearer: I was aiming to offer reasons for why Nick Cooney's book may have gotten a fraction of the attention to date (and, to a lesser extent, pushing back a bit on the idea that it would be "very easy to implement").
The rest of us can help by telling others that Will MacAskill is seeking to divest himself of this reputation whenever we see or hear someone talking about him as if he still wants to be that person (not that he ever did, as evidenced by his above statement, a sentiment I've seen him express in years past).
I'm glad that you are stepping down from EV UK and focusing more on global priorities and cause prioritisation (and engaging on this forum!). I have a feeling, given your philosophy background, that this will move you to focus more where you have a comparative advantage. I can't wait to read what you have to say about AI!
Thanks for your work; it's my sense that you work really, really hard and have done so for a long time. Thank you.
Thanks for the emotional effort. I guess that at times your part within EA is pretty sad, tiring, and stressful. I'm sad if that happens to you.
I sense you screwed up in trusting SBF and in someone not being on top of where the money was moving in FTXFF. It was an error. Seems worth calling an L an L (a loss a loss). This has caused a little harm to me personally and I forgive you. Sounds fatuous but feels important to say. I'm pretty confident if our roles were reversed I would have screwed it up much worse. I think many people would have - given how long it took for billions of $ to figure this out.
I don't know if the decision to step down is the right one. I acknowledge my prior is that you should, but there is much more relevant information that we don't have. I will say that you are almost uniquely skilled at the job, and often I guess that people who were pretty good but made some big errors are better than people who were generally bad or who are unskilled. I leave that up to you, but it seems worth saying.
I sense, on balance, that the thing that most confuses/concerns me is this line from the Time article:
"Bouscal recalled speaking to Mac Aulay immediately after one of Mac Aulay’s conversations with MacAskill in late 2018. “Will basically took Sam’s side,” said Bouscal, who recalls waiting with Mac Aulay in the Stockholm airport while she was on the phone. (Bouscal and Mac Aulay had once dated; though no longer romantically involved, they remain close friends.) “Will basically threatened Tara,” Bouscal recalls. “I remember my impression being that Will was taking a pretty hostile stance here and that he was just believing Sam’s side of the story, which made no sense to me.”"
While other things may have been bigger errors, this one seems the most sort of "out of character" or "bad normsy". And I know Naia well enough that this moves me a lot, even though it seems so out of character for you (maybe 30% that this is a broadly accurate account). This causes me consternation; I don't understand it, and I think if this happened it was really bad, and behaviour like it should not happen from any powerful EAs (or any EAs, frankly).
Again, I find this very hard to think about, and my priors are that there should be consequences for this - perhaps it does imply you aren't suitable for some of these roles. But I'm pretty open-minded to the idea that that isn't the case - we just know your flaws where we don't know most other people's. The Copenhagen Theory of Leadership.
I don't really know how to integrate all these things into a coherent view of the world, so I will just leave them here jagged and hungry for explanation.
Thanks for your work, I am optimistic about the future, I wish you well in whatever you go forward and do
I don't know if the decision to step down is the right one. I acknowledge my prior is that you should, but there is much more relevant information that we don't have. I will say that you are almost uniquely skilled at the job, and often I guess that people who were pretty good but made some big errors are better than people who were generally bad or who are unskilled. I leave that up to you, but it seems worth saying.
I think it's important to consider that the nature of being on the EVF board over the next few years is likely to be much different than it was pre-FTX. No matter the result of the CC inquiry, EVF needs to consider itself as on the CC's radar for the next few years, and that means extra demands on trustees to handle corporate-governance type stuff. It sounds like a number of projects will spin off (which I think should happen), and the associated logistics will be a major source of board involvement. Plus there's all the FTX fallout, for which Will is recused anyway.
So there are a number of reasons someone might decide to step down, including that the post-FTX role just takes too much of their time, or that they don't have a comparative advantage in light of the new expected composition of the board's workload.
This is an ancillary point, but IMO it would be very unfair to focus too much on what Will personally did or did not know about FTX. There were plenty of other opportunities for other people with far less personal involvement to partially figure this one out, and some did so before the site's failure.
My own major red flag about FTX, for instance, was the employment of Dan Friedberg as their chief regulatory officer, a known liar and fraud-enabler from his involvement with the UltimateBet superusing scandal. Friedberg's executive role at FTX was public record, while the tapes that confirmed the degree of his involvement in the thefts at UltimateBet were leaked in 2013 and were widely publicized in the poker community. Some prominent EAs are even former professional poker players (Igor Kurganov and Liv Boeree).
Even just a few months before FTX's failure, enormous red flags were emerging everywhere. Due to the bankruptcy proceedings of crypto lender Voyager, it became public knowledge in July 2022 that Alameda Research owed them $377 million at the time of bankruptcy. The obvious conclusion was that, like Voyager's other outstanding debtor Three Arrows Capital, Alameda was insolvent (which we now know was in fact the case). All this was public record and easy to find if you paid a bit of attention to crypto (i.e. it was reported in the crypto press), which surely many EAs did at the time.
tl;dr This idea that only a small caste of elite EAs with access to privileged information about FTX could have made some good educated guesses about the potential risks does not stack up: there was plenty in the public record, and I suggest that EA collectively was very happy to look the other way as long as the money was flowing and SBF seemed like a nice EA vegan boy. No doubt Will & other elite EAs deserve some blame, but it would be very easy to try to pin too much on them.
Given how badly and broadly FTX was missed by a variety of actors, it's hard to assign much relative blame to anyone absent circumstances that distinguish their potential blame above the baseline:
Some people had access to significant non-public information that should have increased their assessment of the risk posed by FTX, above and beyond the publicly-available information.
Some people had a particularized duty to conduct due diligence and form an assessment of FTX's risk (or to supervise someone to whom this duty was delegated). This duty would accrue from, e.g., a senior leadership role in an organization receiving large amounts of FTX funding. In other words, it was some people's job to think about FTX risk.
Your average EA met neither of these criteria. In contrast, I think these two criteria -- special knowledge and special responsibility -- are multiplicative (i.e., that the potential blame for someone meeting both criteria is much higher than for those who met only one).
Some people had access to significant non-public information that should have increased their assessment of the risk posed by FTX
Plausible. Also plausible that they also had access to info that decreased their assessment. Perhaps the extra info they had access to even suggested they should decrease their assessment overall. Or perhaps they didn't have access to any significant/relevant extra info.
It was some people's job to think about FTX risk
Agreed. But I think Benjamin_Todd offers a good reflection on this:
I’m unconvinced that there should have been much more scenario / risk planning. I think it was already obvious that FTX might fall 90% in a crypto bear market (e.g. here) – and if that was all that happened, things would probably be OK. What surprised people was the alleged fraud and that everything was so entangled it would all go to zero at once, and I’m skeptical additional risk surveying exercises would have ended up with a significant credence on these (unless a bunch of other things were different). There were already some risk surveying attempts and they didn’t get there (e.g. in early 2022, metaculus had a 1% chance of FTX making any default on customer funds over the year with ~40 forecasters).
I mentioned a few months ago that I was planning to resign from the board of EV UK: I’ve now officially done so.
Since last November, I’ve been recused from the board on all matters associated with FTX and related topics, which has ended up being a large proportion of board business. (This is because the recusal affected not just decisions that were directly related to the collapse of FTX, but also many other decisions for which the way EV UK has been affected by the collapse of FTX was important context.) I know I initially said that I’d wait for there to be more capacity, but trustee recruitment has moved more slowly than I’d anticipated, and with the ongoing recusal I didn’t expect to add much capacity for the foreseeable future, so it felt like a natural time to step down.
It’s been quite a ride over the last eleven years. Effective Ventures has grown to a size far beyond what I expected, and I’ve felt privileged to help it on that journey. I deeply respect the rest of the board, and the leadership teams at EV, and I’m glad they’re at the helm.
Some people have asked me what I’m currently working on, and what my plans are. This year my time has been spread over a number of different things, including fundraising, helping out other EA-adjacent public figures, supporting GPI, CEA, and 80,000 Hours, writing additions to What We Owe The Future, and helping with the print textbook version of utilitarianism.net that’s coming out next year. It’s also personally been the toughest year of my life; my mental health has been at its worst in over a decade, and I’ve been trying to deal with that, too.
At the moment, I’m doing three main things:
- Some public engagement, in particular around the WWOTF paperback and foreign language book launches and at EAGxBerlin. This has been and will be lower-key than the media around WWOTF last year, and more focused on in-person events; I’m also more focused on fundraising than I was before.
- Research into “trajectory changes”: in particular, ways of increasing the wellbeing of future generations other than ‘standard’ existential risk mitigation strategies, especially issues that arise even if we solve AI alignment, like digital sentience and the long reflection. I’m also doing some learning to try to get to grips with how to update properly on the latest developments in AI, in particular with respect to the probability of an intelligence explosion in the next decade, and on how hard we should expect AI alignment to be.
- Gathering information for what I should focus on next. In the medium term, I still plan to be a public proponent of EA-as-an-idea, both because I think that plays to my comparative advantage and because I’m worried about people neglecting “EA qua EA”. If anything, all the crises faced by EA and by the world in the last year have reminded me of just how deeply I believe in EA as a project, and how the message of taking a thoughtful, humble, and scientific approach to doing good is more important than ever. The precise options I’m considering are still quite wide-ranging, including: a podcast and/or YouTube show and/or Substack; a book on effective giving; a book on evidence-based living; or deeper research into the ethics and governance questions that arise even if we solve AI alignment. I hope to decide on that by the end of the year.
Will - of course I have some lingering reservations but I do want to acknowledge how much you've changed and improved my life.
You definitely changed my life by co-creating Centre for Effective Altruism, which played a large role in organizations like Giving What We Can and 80,000 Hours, which is what drew me into EA. I was also very inspired by "Doing Good Better".
To get more personal -- you also changed my life when you told me in 2013 pretty frankly that my original plan to pursue a Political Science PhD wasn't very impactful and that I should consider 80,000 Hours career coaching instead, which I did.
You also changed my life by being open about taking antidepressants, which is ~90% of the reason why I decided to also consider taking antidepressants even though I didn't feel "depressed enough" (I definitely was). I felt like if you were taking them and you seemed normal / fine / not clearly and obviously depressed all the time yet benefitted from them, then maybe I would also benefit from them (I did). It really shattered a stereotype for me.
You're now an inspiration for me in terms of resilience. An impact journey isn't always up and up and up all the time. 2022 and 2023 were hard for me. I imagine they were much harder for you -- but you persevere, smile, and continue to show your face. I like that and want to be like that too.
Thank you for all your work, and I'm excited for your ongoing and future projects Will, they sound very valuable! But I hope and trust you will be giving equal attention to your well-being in the near-term. These challenges will need your skills, thoughtfulness and compassion for decades to come. Thank you for being so frank - I know you won't be alone in having found this last year challenging mental health-wise, and it can help to hear others be open about it.
Thanks for all your work over the last 11 years Will, and best of luck on your future projects. I have appreciated your expertise on and support of EA qua EA, and would be excited about you continuing to support that.
Thanks for all of your hard work on EV, Will! I’ve really appreciated your individual example of generosity and commitment, boldness, initiative-taking, and leadership. I feel like a lot of things would happen more slowly or less ambitiously, or not at all, if it weren’t for your ability to inspire others to dive in and act on the courage of their convictions. I think this was really important for Giving What We Can, 80,000 Hours, the Centre for Effective Altruism, the Global Priorities Institute, and your books. Inspiration, enthusiasm, and positivity from you have been a force-multiplier on my own work, and in the lives of many others that I have worked with. I wish you all the best in your upcoming projects.
Thank you for all of your hard work over many years, Will. I've really valued your ability to slice through strategic movement-building questions, your care and clear communication, your positivity, and your ability to inspire massive projects and get them off the ground. I think you've done a lot of good. I'm excited for you to look after yourself, reflect on what's next, and keep working towards a better world.
Thanks so much for all your hard work on CEA/EV over the many years. You have been such a driving force in developing the ideas, the community, and the institutions we needed to help make it all work well. Much of that work has happened through CEA/EV, and before that through Giving What We Can and 80,000 Hours before we'd set up CEA to house them, so this is definitely in some sense the end of an era for you (and for EV). But a lot of your intellectual work and vision has always transcended the particular organisations, and I'm really looking forward to much more of that to come!
Thanks so much for your work, Will! I think this is the right decision given the circumstances and that will help EV move in a good direction. I know some mistakes were made but I still want to recognize your positive influence.
I'm eternally grateful to you for getting me to focus on the question of "how to do the most good with our limited resources?".
I remember how I first heard about EA.
The unassuming flyer taped to the philosophy building wall first caught my eye: “How to do the most good with your career?”
It was October 2013, midterms week at Tufts University, and I was hustling between classes, focused on nothing but grades and graduation. But that disarmingly simple question gave me pause. It felt like an invitation to think bigger.
Curiosity drew me to the talk advertised on the flyer by some Oxford professor named Will MacAskill. I arrived to find just two other students in the room. None of us knew that Will would become so influential.
What followed was no ordinary lecture, but rather a life-changing conversation that has stayed with me for the past decade. Will challenged us to zoom out and consider how we could best use our limited time and talents to positively impact the world. With humility and nuance, he focused not on prescribing answers, but on asking the right questions.
Each of us left that classroom determined to orient our lives around doing the most good. His talk sent me on a winding career journey guided by this question. I dabbled in climate change policy before finding my path in AI safety thanks to 80K's coaching.
Ten years later, I’m still asking myself that question Will posed back in 2013: How can I use my career to do the most good? It shapes every decision I make. (I'm arguably a bit too obsessed with it!). I know countless others can say the same.
So thank you, Will, for inspiring generations of people with your catalytic question. The ripples from that day continue to spread. Excited for what you'll do next!
Given the TIME article, I thought I should give you all an update. Even though I have major issues with the piece, I don’t plan to respond to it right now.
Since my last shortform post, I’ve done a bunch of thinking, updating and planning in light of the FTX collapse. I had hoped to be able to publish a first post with some thoughts and clarifications by now; I really want to get it out as soon as I can, but I won’t comment publicly on FTX at least until the independent investigation commissioned by EV is over. Unfortunately, I think that’s a minimum of 2 months, and I’m still sufficiently unsure on timing that I don’t want to make any promises on that front. I’m sorry about that: I’m aware that this will be very frustrating for you; it’s frustrating for me, too.
Thank you for sharing this. I think lots of us would be interested in hearing your take on that post, so it's useful to understand your (reasonable-sounding) rationale of waiting until the independent investigation is done.
Could you share the link to your last shortform post? (it seems like the words "last shortform post" are linking to the Time article again, which I'm assuming is a mistake?)
Unfortunately, I think that’s a minimum of 2 months, and I’m still sufficiently unsure on timing that I don’t want to make any promises on that front. I’m sorry about that: I’m aware that this will be very frustrating for you; it’s frustrating for me, too.
I’ve been thinking hard about whether to publicly comment more on FTX in the near term. Much for the reasons Holden gives here, and for some of the reasons given here, I’ve decided against saying any more than I’ve already said for now.
I’m still in the process of understanding what happened, and processing the new information that comes in every day. I'm also still working through my views on how I and the EA community could and should respond.
I know this might be dissatisfying, and I’m really sorry about that, but I think it’s the right call, and will ultimately lead to a better and more helpful response.
It's not the paramount concern and I doubt you'd want it to be, but I have thought several times that this might be pretty hard for you. I hope you (and all of the Future Fund team and, honestly all of the FTX team) are personally well, with support from people who care about you.
As you are framing the choice between work on alignment and work on grand challenges/non-alignment work needed under transformative AI, I am curious how you think about pause efforts as a third class of work. Is this something you have thoughts on?
Perhaps at the core there is a theme here that comes up a lot which goes a bit like: Clearly there is a strong incentive to 'work on' any imminent and unavoidable challenge whose resolution could require or result in "hard-to-reverse decisions with important and long-lasting consequences". Current x-risks have been established as sort of the 'most obvious' such challenges (in the sense that making wrong decisions potentially results in extinction, which obviously counts as 'hard-to-reverse' and the consequences of which are 'long-lasting'). But can we think of any other such challenges or any other category of such challenges? I don't know of any others that I've found anywhere near as convincing as the x-risk case, but I suppose that's why the example project on case studies could be important?
Another thought I had is kind of: Why might people who have been concerned about x-risk from misaligned AI pivot to asking about these other challenges? (I'm not saying Will counts as 'pivoting' but just generally asking the question). I think one question I have in mind is: Is it because we have already reached a point of small (and diminishing) returns from putting today's resources into the narrower goal of reducing x-risk from misaligned AI?
Hey - I’m starting to post and comment more on the Forum than I have been, and you might be wondering about whether and when I’m going to respond to questions around FTX. So here’s a short comment to explain how I’m currently thinking about things:
The independent investigation commissioned by EV is still ongoing, and the firm running it strongly preferred me not to publish posts on backwards-looking topics around FTX while the investigation is still in-progress. I don’t know when it’ll be finished, or what the situation will be like for communicating on these topics even after it’s done.
I had originally planned to get out a backwards-looking post early in the year, and I had been holding off on talking about other things until that was published. That post has been repeatedly delayed, and I’m not sure when it’ll be able to come out. If I’d known that it would have been delayed this long, I wouldn’t have waited on it before talking on other topics, so I’m now going to start talking more than I have been, on the Forum and elsewhere; I’m hoping I can be helpful for some of the other issues that are currently active topics of discussion.
Briefly, though, and as I indicated before: I had no idea that Sam and others were misusing customer funds. Since November I’ve thought a lot about whether there were signs of this I really should have spotted, but even in hindsight I don’t think I had reason to suspect that that was happening.
Looking back, I wish I’d been far less trusting of Sam and those who’ve pleaded guilty. Looking forward, I’m going to be less likely to infer that, just because someone has sincere-seeming signals of being highly morally motivated, like being vegan or demonstrating credible plans to give away most of their wealth, they will have moral integrity in other ways, too.
I’m also more wary, now, of having such a high-trust culture within EA, especially as EA grows. This thought favours robust governance mechanisms even more than before ("trust but verify"), so that across EA we can have faith in organisations and institutions, rather than heavily relying on character judgement about the leaders of those organisations.
EA has grown enormously over the last few years; in many ways it feels like an adolescent, in the process of learning how to deal with its newfound role in the world. I'm grateful that we're in a moment of opportunity to think more about how to improve ourselves, including both how we work and how we think and talk about effective altruism.
As part of that broader set of reflections (especially around the issue of (de)centralisation in EA), I’m making some changes to how I operate, which I describe, along with some of the other changes happening across EA, in my post on decision-making and decentralisation here. First, I plan to distance myself from the idea that I’m “the face of” or “the spokesperson for” EA; this isn’t how I think of myself, and I don’t think that description reflects reality, but I’m sometimes portrayed that way. I think moving in the direction of clarity on this will better reflect reality and be healthier for both me and the movement.
Second, I plan to step down from the board of Effective Ventures UK once it has more capacity and has recruited more trustees. I found it tough to come to this decision: I’ve been on the board of EV UK (formerly CEA) for 11 years now, and I care deeply, in a very personal way, about the projects housed under EV UK, especially CEA, 80,000 Hours, and Giving What We Can. But I think it’s for the best, and when I do step down I’ll know that EV will be in good hands.
Over the next year, I’ll continue to do learning and research on global priorities and cause prioritisation, especially in light of the astonishing (and terrifying) developments in AI over the last year. And I’ll continue to advocate for EA and related ideas: for example, in September, WWOTF will come out in paperback in the US and UK, and will come out in Spanish, German, and Finnish that month, too. Given all that’s happened in the world in the last few years — including a major pandemic, war in Europe, rapid AI advances, and an increase in extreme poverty rates — it’s more important than ever to direct people, funding and clear thinking towards the world’s most important issues. I’m excited to continue to help make that happen.
I'm curious what ways you're considering to mitigate being seen as the face of / spokesperson for EA.
Honestly, it does seem like it might be challenging, and I welcome ideas on things to do. (In particular, it might be hard without sacrificing lots of value in other ways. E.g. going on big-name podcasts can be very, very valuable, and I wouldn’t want to indefinitely avoid doing that - that would be too big a cost. More generally, public advocacy is still very valuable, and I still plan to be “a” public proponent of EA.)
The lowest-hanging fruit is just really hammering the message to journalists / writers I speak to; but there’s not a super tight correlation between what I say to journalists / writers and what they write about. Having others give opening / closing talks at EAG also seems like an easy win.
The ideal is that we build up a roster of EA-aligned public figures. I’ve been spending some time on that this year, providing even more advice / encouragement to potential public figures than before, and connecting them to my network. The last year has made it more challenging though, as there are larger costs to being an EA-promoting public figure than there were before, so it’s a less attractive prospect; at the same time, a lot of people are now focusing on AI in particular. But there are a number of people who I think could be excellent in this position.
I talk a bit more about some of the challenges in “Will MacAskill should not be the face of EA”:
“First, building up a solid roster of EA public figures will take a while - many years, at least. For example, suppose that someone decides to become a public figure and goes down a book-writing path. Writing a book typically takes a couple of years or more, then there’s a year between finishing the manuscript and publication. And people’s first books are rarely huge hits - someone’s public profile tends to build slowly over time. There are also just a few things that are hard and take time to replicate, like conventional status indicators (being a professor at a prestigious university).
Second, I don’t think we’re ever going to be able to get away from a dynamic where a handful of public figures are far more well-known than all others. Amount of public attention (as measured by, e.g. twitter followers) follows a power law. So if we try to produce a lot of potential public figures, just via the underlying dynamics we’ll probably get a situation where the most-well-known person is a lot more well-known than the next most-well-known person, and so on.
(The same dynamic is why it’s more or less inevitable that much or most funding in EA will come from a small handful of donors. Wealth follows a fat-tailed distribution; the people who are most able to donate will be able to donate far more than most people. Even for GiveWell, which is clearly aimed at “retail” donors, 51% of their funds raised came from a single donor — Open Philanthropy.)
It’s also pretty chance-y who gets the most attention at any one time. WWOTF ended up getting a lot more media attention than The Precipice; this could easily have been the other way around. It certainly didn’t have anything to do with the intrinsic quality of the two books; rather, factors outside anyone's control, like The Precipice being published right after COVID, made a significant difference. Toby also got a truly enormous amount of media attention in both 2009 and 2010 (I think he was the most-read news story on the BBC both times); if that had happened now, he’d have a much larger public profile than he currently does.
All this is to say: progress here will take some time. A major success story for this plan would be that, in five years’ time, there are a couple more well-known EA figureheads, in addition to what we have now. That said, there are still things we can do to make progress on this in the near term: having other people speak with the media when they can; having other people give the opening talks at EAGs; and showcasing EAs who already have public platforms, like Toby Ord, or Natalie Cargill, who has an excellent TED talk coming out this year.”
CEA distributes books at scale, right? Offering a wider range of books could boost name recognition of other authors and remove a signal of emphasis on you. This would be far from a total fix, but it's very easy to implement.
I haven't kept up with recent books, but back in 2015 I preferred Nick Cooney's intro to EA book to both yours and Peter Singer's, and thought it was a shame it got a fraction of the attention.
Presumably it's easier to sell your own book than someone else's? I assume CEA is able to get a much better rate on The Precipice and What We Owe The Future than How To Be Great At Doing Good or The Most Good You Can Do. The Life You Can Save (the org) even bought the rights to The Life You Can Save (the book) to make it easier to distribute.
[Edit: This may have been a factor too/instead:
I can't find anything similar for Peter's or Nick's books.]
It will always be easier to promote nearby, highly popular people than farther-away, lesser-known people. One person being the "face" is the natural outcome of that dynamic. If you want a diverse field, you need to promote other people even when it's more effort in the short run.
Agreed, sorry, I should have been clearer: I was aiming to offer reasons why Nick Cooney's book may have gotten a fraction of the attention to date (and, to a lesser extent, to push back a bit on the idea that it would be "very easy to implement").
Have you thought about not doing interviews?
The rest of us can help: whenever we see or hear someone talking about him as if he still wants to be that person, we can point out that Will MacAskill is seeking to divest himself of this reputation (not that he ever wanted it, as evidenced by his statement above, a sentiment I've seen him express in years past).
I'm glad that you are stepping down from EV UK and focusing more on global priorities and cause prioritisation (and engaging on this forum!). I have a feeling, given your philosophy background, that this will move you to focus more where you have a comparative advantage. I can't wait to read what you have to say about AI!
Thanks! And I agree re comparative advantage!
I'm confused by the disagree-votes on Malde's comment, since it makes sense to me. Can anyone who disagreed explain their reasoning?
I'm much more confused by the single strong (-4) downvote on yours at the time of writing. (And no agree/disagree votes.)
By the way, I can only see one (strong, -7) disagree-vote on Malde's.
Some quick thoughts:
I think it's important to consider that the nature of being on the EVF board over the next few years is likely to be very different from what it was pre-FTX. No matter the result of the Charity Commission (CC) inquiry, EVF needs to consider itself on the CC's radar for the next few years, and that means extra demands on trustees to handle corporate-governance work. It sounds like a number of projects will spin off (which I think should happen), and the associated logistics will be a major source of board involvement. Plus there's all the FTX fallout, for which Will is recused anyway.
So there are a number of reasons someone might decide to step down, including that the post-FTX role just takes too much of their time, or that they don't have a comparative advantage in light of the new expected composition of the board's workload.
This is an ancillary point, but IMO it would be very unfair to focus too much on what Will personally did or did not know about FTX. There were plenty of other opportunities for other people with far less personal involvement to partially figure this one out, and some did so before the site's failure.
My own major red flag about FTX, for instance, was the employment of Dan Friedberg as their chief regulatory officer, a known liar and fraud-enabler from his involvement with the UltimateBet superusing scandal. Friedberg's executive role at FTX was public record, while the tapes that confirmed the degree of his involvement in the thefts at UltimateBet were leaked in 2013 and widely publicized in the poker community. Some prominent EAs are even former professional poker players (Igor Kurganov and Liv Boeree).
Even just a few months before FTX's failure, enormous red flags were emerging everywhere. Due to the bankruptcy proceedings of the crypto lender Voyager, it became public knowledge in July 2022 that Alameda Research owed them $377 million at the time of bankruptcy. The obvious conclusion was that, like Voyager's other outstanding debtor Three Arrows Capital, Alameda was insolvent (which we now know was in fact the case). All this was public record and easy to find if you paid a bit of attention to crypto (i.e. it was reported in the crypto press), which surely many EAs did at the time.
tl;dr This idea that only a small caste of elite EAs with access to privileged information about FTX could have made some good educated guesses about the potential risks does not stack up: there was plenty in the public record, and I suggest that EA collectively was very happy to look the other way as long as the money was flowing and SBF seemed like a nice EA vegan boy. No doubt Will & other elite EAs deserve some blame, but it would be very easy to try to pin too much on them.
Given how badly and broadly FTX was missed by a variety of actors, it's hard to assign much relative blame to anyone absent circumstances that distinguish their potential blame above the baseline:
- special knowledge about FTX that the general public lacked; and/or
- special responsibility for vetting or overseeing the relationship with FTX.
Your average EA met neither of these criteria. In contrast, I think these two criteria -- special knowledge and special responsibility -- are multiplicative (i.e., that the potential blame for someone meeting both criteria is much higher than for those who met only one).
Plausible. It's also plausible that they had access to info that decreased their assessment. Perhaps the extra info they had access to even suggested they should decrease their assessment overall. Or perhaps they didn't have access to any significant/relevant extra info.
Agreed. But I think Benjamin_Todd offers a good reflection on this:
I read almost all of the comments on the original EA Forum post linking to the Time article in question. If I recall correctly, Will made a quick comment that he would respond to these kinds of details when he was at liberty to do so. (Edit: he made that point even more clearly in this shortform post he wrote a few months ago. https://forum.effectivealtruism.org/posts/TeBBvwQH7KFwLT7w5/william_macaskill-s-shortform?commentId=ACDPftuESqkJP9RxP)
I assume he will address these concerns you've mentioned here at the same time he provides a fuller retrospective on the FTX collapse and its fallout.
I mentioned a few months ago that I was planning to resign from the board of EV UK: I’ve now officially done so.
Since last November, I’ve been recused from the board on all matters associated with FTX and related topics, which has ended up being a large proportion of board business. (This is because the recusal affected not just decisions that were directly related to the collapse of FTX, but also many other decisions for which the way EV UK has been affected by the collapse of FTX was important context.) I know I initially said that I’d wait for there to be more capacity, but trustee recruitment has moved more slowly than I’d anticipated, and with the ongoing recusal I didn’t expect to add much capacity for the foreseeable future, so it felt like a natural time to step down.
It’s been quite a ride over the last eleven years. Effective Ventures has grown to a size far beyond what I expected, and I’ve felt privileged to help it on that journey. I deeply respect the rest of the board, and the leadership teams at EV, and I’m glad they’re at the helm.
Some people have asked me what I’m currently working on, and what my plans are. This year my time has been spread across a number of different things, including fundraising, helping out other EA-adjacent public figures, support for GPI, CEA and 80,000 Hours, writing additions to What We Owe The Future, and helping with the print textbook version of utilitarianism.net that’s coming out next year. It’s also personally been the toughest year of my life; my mental health has been at its worst in over a decade, and I’ve been trying to deal with that, too.
At the moment, I’m doing three main things:
- Some public engagement, in particular around the WWOTF paperback and foreign language book launches and at EAGxBerlin. This has been and will be lower-key than the media around WWOTF last year, and more focused on in-person events; I’m also more focused on fundraising than I was before.
- Research into "trajectory changes": ways of increasing the wellbeing of future generations other than 'standard' existential risk mitigation strategies, in particular issues that arise even if we solve AI alignment, like digital sentience and the long reflection. I’m also doing some learning to try to get to grips with how to update properly on the latest developments in AI, in particular with respect to the probability of an intelligence explosion in the next decade, and with how hard we should expect AI alignment to be.
- Gathering information for what I should focus on next. In the medium term, I still plan to be a public proponent of EA-as-an-idea, both because I think it plays to my comparative advantage and because I’m worried about people neglecting “EA qua EA”. If anything, all the crises faced by EA and by the world in the last year have reminded me of just how deeply I believe in EA as a project, and how the message of taking a thoughtful, humble, and scientific approach to doing good is more important than ever. The precise options I’m considering are still quite wide-ranging, including: a podcast and/or YouTube show and/or Substack; a book on effective giving; a book on evidence-based living; or deeper research into the ethics and governance questions that arise even if we solve AI alignment. I hope to decide on that by the end of the year.
Will - of course I have some lingering reservations but I do want to acknowledge how much you've changed and improved my life.
You definitely changed my life by co-creating the Centre for Effective Altruism, which played a large role in organizations like Giving What We Can and 80,000 Hours; those are what drew me into EA. I was also very inspired by "Doing Good Better".
To get more personal -- you also changed my life when you told me in 2013 pretty frankly that my original plan to pursue a Political Science PhD wasn't very impactful and that I should consider 80,000 Hours career coaching instead, which I did.
You also changed my life by being open about taking antidepressants, which is ~90% of the reason why I decided to also consider taking antidepressants even though I didn't feel "depressed enough" (I definitely was). I felt like if you were taking them and you seemed normal / fine / not clearly and obviously depressed all the time, yet benefitted from them, then maybe I would also benefit from them (I did). It really shattered a stereotype for me.
You're now an inspiration for me in terms of resilience. An impact journey isn't always up and up and up all the time. 2022 and 2023 were hard for me. I imagine they were much harder for you -- but you persevere, smile, and continue to show your face. I like that and want to be like that too.
Thank you for all your work, and I'm excited for your ongoing and future projects Will, they sound very valuable! But I hope and trust you will be giving equal attention to your well-being in the near-term. These challenges will need your skills, thoughtfulness and compassion for decades to come. Thank you for being so frank - I know you won't be alone in having found this last year challenging mental health-wise, and it can help to hear others be open about it.
Thanks for all your work over the last 11 years Will, and best of luck on your future projects. I have appreciated your expertise on and support of EA qua EA, and would be excited about you continuing to support that.
Thanks for all of your hard work on EV, Will! I’ve really appreciated your individual example of generosity and commitment, boldness, initiative-taking, and leadership. I feel like a lot of things would happen more slowly or less ambitiously---or not at all---if it weren’t for your ability to inspire others to dive in and act on the courage of their convictions. I think this was really important for Giving What We Can, 80,000 Hours, Centre for Effective Altruism, the Global Priorities Institute, and your books. Inspiration, enthusiasm, and positivity from you has been a force-multiplier on my own work, and in the lives of many others that I have worked with. I wish you all the best in your upcoming projects.
Thank you for all of your hard work over many years, Will. I've really valued your ability to slice through strategic movement-building questions, your care and clear communication, your positivity, and your ability to inspire massive projects and get them off the ground. I think you've done a lot of good. I'm excited for you to look after yourself, reflect on what's next, and keep working towards a better world.
Thanks so much for all your hard work on CEA/EV over the many years. You have been such a driving force in developing the ideas, the community, and the institutions we needed to help make it all work well. Much of that work happened through CEA/EV, and before that through Giving What We Can and 80,000 Hours before we'd set up CEA to house them, so this is definitely in some sense the end of an era for you (and for EV). But a lot of your intellectual work and vision has always transcended the particular organisations, and I'm really looking forward to much more of that to come!
Thanks so much for your work, Will! I think this is the right decision given the circumstances and that will help EV move in a good direction. I know some mistakes were made but I still want to recognize your positive influence.
I'm eternally grateful to you for getting me to focus on the question of "how to do the most good with our limited resources?".
I remember how I first heard about EA.
The unassuming flyer taped to the philosophy building wall first caught my eye: “How to do the most good with your career?”
It was October 2013, midterms week at Tufts University, and I was hustling between classes, focused on nothing but grades and graduation. But that disarmingly simple question gave me pause. It felt like an invitation to think bigger.
Curiosity drew me to the talk advertised on the flyer by some Oxford professor named Will MacAskill. I arrived to find just two other students in the room. None of us knew that Will would become so influential.
What followed was no ordinary lecture, but rather a life-changing conversation that has stayed with me for the past decade. Will challenged us to zoom out and consider how we could best use our limited time and talents to positively impact the world. With humility and nuance, he focused not on prescribing answers, but on asking the right questions.
Each of us left that classroom determined to orient our lives around doing the most good. His talk sent me on a winding career journey guided by this question. I dabbled in climate change policy before finding my path in AI safety thanks to 80K's coaching.
Ten years later, I’m still asking myself that question Will posed back in 2013: How can I use my career to do the most good? It shapes every decision I make. (I'm arguably a bit too obsessed with it!). I know countless others can say the same.
So thank you, Will, for inspiring generations of people with your catalytic question. The ripples from that day continue to spread. Excited for what you'll do next!
Given the TIME article, I thought I should give you all an update. Even though I have major issues with the piece, I don’t plan to respond to it right now.
Since my last shortform post, I’ve done a bunch of thinking, updating and planning in light of the FTX collapse. I had hoped to be able to publish a first post with some thoughts and clarifications by now; I really want to get it out as soon as I can, but I won’t comment publicly on FTX at least until the independent investigation commissioned by EV is over. Unfortunately, I think that’s a minimum of 2 months, and I’m still sufficiently unsure on timing that I don’t want to make any promises on that front. I’m sorry about that: I’m aware that this will be very frustrating for you; it’s frustrating for me, too.
Going to be honest and say that I think this is a perfectly sensible response and I would do the same in Will's position.
Thank you for sharing this. I think lots of us would be interested in hearing your take on that post, so it's useful to understand your (reasonable-sounding) rationale of waiting until the independent investigation is done.
Could you share the link to your last shortform post? (it seems like the words "last shortform post" are linking to the Time article again, which I'm assuming is a mistake?)
Sorry - done, thanks!
When is the independent investigation expected to complete?
In the post Will said:
I’ve been thinking hard about whether to publicly comment more on FTX in the near term. Much for the reasons Holden gives here, and for some of the reasons given here, I’ve decided against saying any more than I’ve already said for now.
I’m still in the process of understanding what happened, and processing the new information that comes in every day. I'm also still working through my views on how I and the EA community could and should respond.
I know this might be dissatisfying, and I’m really sorry about that, but I think it’s the right call, and will ultimately lead to a better and more helpful response.
It's not the paramount concern and I doubt you'd want it to be, but I have thought several times that this might be pretty hard for you. I hope you (and all of the Future Fund team and, honestly all of the FTX team) are personally well, with support from people who care about you.
Do you plan to comment in a few weeks, a few months, or not planning to comment publicly? Or is that still to be determined?
Thanks for asking! Still not entirely determined - I’ve been planning some time off over the winter, so I’ll revisit this in the new year.